
Refactor Go-DI template generation to use location expressions #31424

Open

grantseltzer wants to merge 60 commits into base: main
Conversation

grantseltzer
Member

@grantseltzer grantseltzer commented Nov 25, 2024

What does this PR do?

This is a major refactor of the way Go-DI generates bpf programs. Instead
of generating bpf programs via templates for specific types, it breaks
captures down into basic building blocks of operations such as reading
from registers or the stack, dereferencing, adding offsets, and writing
to output.

Templates are now these basic operations, and as a result we can express
much more complex captures, such as pointers to pointers, slices of strings,
or any combination of the types that were previously supported.
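
As a rough sketch of the idea (the type, field, and opcode names below are invented for illustration and are not the actual ditypes API), a pointer-to-pointer capture can be composed from the same handful of primitives instead of needing its own dedicated template:

package main

import "fmt"

// LocationExpression is one basic building block of a capture (illustrative only).
type LocationExpression struct {
	Opcode string // e.g. "ReadRegister", "Dereference", "ApplyOffset", "WriteToOutput"
	Arg1   uint   // opcode-specific argument: register number, offset, or size in bytes
	Arg2   uint
}

// pointerToPointerCapture composes primitives to read **T from a register:
// read the outer pointer, dereference twice, then write the value to output.
func pointerToPointerCapture(register, valueSize uint) []LocationExpression {
	return []LocationExpression{
		{Opcode: "ReadRegister", Arg1: register, Arg2: 8},
		{Opcode: "Dereference", Arg1: 8},
		{Opcode: "Dereference", Arg1: valueSize},
		{Opcode: "WriteToOutput", Arg1: valueSize},
	}
}

func main() {
	for _, op := range pointerToPointerCapture(0, 8) {
		fmt.Printf("%s(%d, %d)\n", op.Opcode, op.Arg1, op.Arg2)
	}
}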

Motivation

The previous iteration of Go-DI could not express complex types such as pointers to pointers; there was no way to express an arbitrary number of dereferences. This refactor greatly expands the set of features and types our customers can instrument.

Describe how to test/QA your changes

Run e2e tests

Possible Drawbacks / Trade-offs

Additional Notes

  • There is still one fix I need to push to this branch. It involves the way length is captured and encoded for embedded slices and strings.
  • I will be pushing more comments as well.
  • There are still linting issues to fix.

@grantseltzer grantseltzer added the team/dynamic-instrumentation and qa/no-code-change labels Nov 25, 2024
@grantseltzer grantseltzer requested review from a team as code owners November 25, 2024 14:54
@github-actions github-actions bot added the long review label Nov 25, 2024
@grantseltzer grantseltzer removed the request for review from cimi November 25, 2024 14:56

cit-pr-commenter bot commented Nov 25, 2024

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: ac7999a0-6563-4b1b-8fe5-06a0a6b5be8c

Baseline: e49efd5
Comparison: 63fc80e
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
|  | quality_gate_logs | % cpu utilization | +2.56 | [-0.43, +5.55] | 1 | Logs |
|  | quality_gate_idle_all_features | memory utilization | +1.18 | [+1.05, +1.31] | 1 | Logs, bounds checks dashboard |
|  | otel_to_otel_logs | ingress throughput | +1.01 | [+0.33, +1.69] | 1 | Logs |
|  | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.43 | [-0.29, +1.16] | 1 | Logs |
|  | file_to_blackhole_500ms_latency | egress throughput | +0.10 | [-0.66, +0.87] | 1 | Logs |
|  | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.09 | [-0.71, +0.89] | 1 | Logs |
|  | file_to_blackhole_100ms_latency | egress throughput | +0.05 | [-0.60, +0.70] | 1 | Logs |
|  | file_to_blackhole_300ms_latency | egress throughput | +0.05 | [-0.58, +0.68] | 1 | Logs |
|  | file_to_blackhole_0ms_latency | egress throughput | +0.04 | [-0.80, +0.87] | 1 | Logs |
|  | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.02] | 1 | Logs |
|  | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.11, +0.11] | 1 | Logs |
|  | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.05 | [-0.96, +0.86] | 1 | Logs |
|  | file_to_blackhole_1000ms_latency | egress throughput | -0.16 | [-0.94, +0.62] | 1 | Logs |
|  | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.23 | [-0.69, +0.22] | 1 | Logs |
|  | quality_gate_idle | memory utilization | -0.24 | [-0.28, -0.19] | 1 | Logs, bounds checks dashboard |
|  | tcp_syslog_to_blackhole | ingress throughput | -0.53 | [-0.60, -0.46] | 1 | Logs |
|  | file_tree | memory utilization | -0.55 | [-0.67, -0.43] | 1 | Logs |

Bounds Checks: ❌ Failed

| perf | experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- | --- |
|  | file_to_blackhole_100ms_latency | lost_bytes | 5/10 |  |
|  | file_to_blackhole_0ms_latency | lost_bytes | 9/10 |  |
|  | file_to_blackhole_0ms_latency | memory_usage | 10/10 |  |
|  | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 |  |
|  | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 |  |
|  | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 |  |
|  | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 |  |
|  | file_to_blackhole_1000ms_latency | memory_usage | 10/10 |  |
|  | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 |  |
|  | file_to_blackhole_100ms_latency | memory_usage | 10/10 |  |
|  | file_to_blackhole_300ms_latency | lost_bytes | 10/10 |  |
|  | file_to_blackhole_300ms_latency | memory_usage | 10/10 |  |
|  | file_to_blackhole_500ms_latency | lost_bytes | 10/10 |  |
|  | file_to_blackhole_500ms_latency | memory_usage | 10/10 |  |
|  | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
|  | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
|  | quality_gate_logs | lost_bytes | 10/10 |  |
|  | quality_gate_logs | memory_usage | 10/10 |  |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
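
As a rough illustration of how these three criteria combine (a minimal sketch using the thresholds quoted above, not the Regression Detector's actual implementation):

package main

import (
	"fmt"
	"math"
)

// experiment holds the quantities reported for one perf experiment.
type experiment struct {
	deltaMeanPct  float64 // estimated Δ mean %
	ciLow, ciHigh float64 // 90% confidence interval on Δ mean %
	markedErratic bool    // experiment configuration marks it "erratic"
}

// isRegression applies the three criteria described above.
func isRegression(e experiment) bool {
	bigEnough := math.Abs(e.deltaMeanPct) >= 5.0  // criterion 1: effect size tolerance
	ciExcludesZero := e.ciLow > 0 || e.ciHigh < 0 // criterion 2: CI does not contain zero
	return bigEnough && ciExcludesZero && !e.markedErratic // criterion 3: not erratic
}

func main() {
	// quality_gate_logs from the table above: +2.56 [-0.43, +5.55] -> not a regression.
	fmt.Println(isRegression(experiment{deltaMeanPct: 2.56, ciLow: -0.43, ciHigh: 5.55}))
}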

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@grantseltzer grantseltzer force-pushed the grantseltzer/DEBUG-2982-multiple-dereference-support-2 branch 2 times, most recently from b3edbd7 to 924fb7f on December 2, 2024 18:52
@grantseltzer grantseltzer force-pushed the grantseltzer/DEBUG-2982-multiple-dereference-support-2 branch 2 times, most recently from e1ffc3a to 3869f77 on December 11, 2024 17:12
@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Dec 11, 2024

Package size comparison

Comparison with ancestor e49efd5a1b34f2f86d5c5b7d46d41c95a9c90fe7

Diff per package
| package | diff | status | size | ancestor | threshold |
| --- | --- | --- | --- | --- | --- |
| datadog-agent-amd64-deb | 0.04MB | ⚠️ | 1265.97MB | 1265.93MB | 140.00MB |
| datadog-iot-agent-amd64-deb | 0.00MB |  | 113.29MB | 113.29MB | 10.00MB |
| datadog-dogstatsd-amd64-deb | 0.00MB |  | 78.41MB | 78.41MB | 10.00MB |
| datadog-heroku-agent-amd64-deb | 0.00MB |  | 526.65MB | 526.65MB | 70.00MB |
| datadog-agent-x86_64-rpm | 0.04MB | ⚠️ | 1275.20MB | 1275.16MB | 140.00MB |
| datadog-agent-x86_64-suse | 0.04MB | ⚠️ | 1275.20MB | 1275.16MB | 140.00MB |
| datadog-iot-agent-x86_64-rpm | 0.00MB |  | 113.35MB | 113.35MB | 10.00MB |
| datadog-iot-agent-x86_64-suse | 0.00MB |  | 113.35MB | 113.35MB | 10.00MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB |  | 78.49MB | 78.49MB | 10.00MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB |  | 78.49MB | 78.49MB | 10.00MB |
| datadog-agent-arm64-deb | 0.06MB | ⚠️ | 1000.98MB | 1000.93MB | 140.00MB |
| datadog-iot-agent-arm64-deb | 0.00MB |  | 108.76MB | 108.76MB | 10.00MB |
| datadog-dogstatsd-arm64-deb | 0.00MB |  | 55.64MB | 55.64MB | 10.00MB |
| datadog-agent-aarch64-rpm | 0.06MB | ⚠️ | 1010.20MB | 1010.14MB | 140.00MB |
| datadog-iot-agent-aarch64-rpm | 0.00MB |  | 108.83MB | 108.83MB | 10.00MB |

Decision

⚠️ Warning

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Dec 11, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=50960741 --os-family=ubuntu

Note: This applies to commit 26663d4

…instead of right before the values. Still need to update event parsing code

This changes the logic for collection limit labels to create a new label
before every slice element and jump to it accordingly. This is to avoid
collisions such as in the case of embedded slices.
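
A hypothetical sketch of that idea (the label format and counter below are invented for illustration, not the actual generator): every slice element, including elements of a nested slice, gets its own limit label, so an embedded slice can never jump to the outer slice's label.

package main

import "fmt"

var labelCounter int

// newCollectionLimitLabel returns a fresh, unique label name for one slice
// element, so nested (embedded) slices never reuse another slice's label.
func newCollectionLimitLabel() string {
	labelCounter++
	return fmt.Sprintf("collection_limit_%d", labelCounter)
}

func main() {
	// A slice of slices: each outer and inner element gets its own label.
	for i := 0; i < 2; i++ {
		fmt.Printf("outer[%d] -> %s\n", i, newCollectionLimitLabel())
		for j := 0; j < 2; j++ {
			fmt.Printf("  inner[%d] -> %s\n", j, newCollectionLimitLabel())
		}
	}
}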

}
for i := range parameters {
	if i >= len(funcMetadata.Parameters) {
		return errors.New("parameter metadata does not line up with parameter itself")
Collaborator
Is it worth including information about the extra parameter in the error?

Should we consider the analysis as failed when this doesn't line up or just take the parameters we have?

If we fail here we can have the length check outside of the inner loop.

Member Author

You bring up a good point. As you also pointed out, the error wasn't being checked anyway. This ties into a larger question of how we want to handle errors that occur while instrumenting a single parameter (or a portion of one) so that instrumentation stays fault tolerant. I've changed the returned errors to log messages so they're at least surfaced, and we now attempt to continue generating location expressions. I don't expect these errors to occur here; if there were an issue with DWARF, an error would more likely occur earlier in execution. Regardless, it's something that can come up and should be handled as part of a larger resiliency strategy.
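
A minimal, self-contained sketch of that "log and keep going" approach (the types and helper here are stand-ins, not the code in this PR; the length check is hoisted out of the loop as suggested above):

package main

import (
	"fmt"
	"log"
)

type parameter struct{ Name string }
type functionMetadata struct{ Parameters []parameter }

// pairParameters sketches the "log and continue" strategy: when DWARF metadata
// has fewer entries than the runtime parameter list, surface it and keep going
// with the parameters that do line up instead of failing the whole function.
func pairParameters(parameters []parameter, funcMetadata functionMetadata) {
	if len(parameters) > len(funcMetadata.Parameters) {
		// Length check hoisted out of the loop, as suggested in review.
		log.Printf("parameter metadata does not line up: %d parameters, %d metadata entries",
			len(parameters), len(funcMetadata.Parameters))
	}
	for i := range parameters {
		if i >= len(funcMetadata.Parameters) {
			break
		}
		fmt.Printf("generating location expressions for %s\n", parameters[i].Name)
	}
}

func main() {
	pairParameters(
		[]parameter{{"a"}, {"b"}, {"c"}},
		functionMetadata{Parameters: []parameter{{"a"}, {"b"}}},
	)
}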

pkg/dynamicinstrumentation/diconfig/binary_inspection.go
@@ -80,6 +81,7 @@ func AnalyzeBinary(procInfo *ditypes.ProcessInfo) error {
// Use the result from InspectWithDWARF to populate the locations of parameters
for functionName, functionMetadata := range r.Functions {
putLocationsInParams(functionMetadata.Parameters, r.StructOffsets, procInfo.TypeMap.Functions, functionName)
populateLocationExpressions(r.Functions, procInfo)
Collaborator

We are ignoring the error here

Member Author

See comment you made in the function body.

	ExpectedParameters []*ditypes.Parameter
}{
	{
		FuncName: "github.com/DataDog/datadog-agent/pkg/dynamicinstrumentation/testutil/sample.test_single_int",
Collaborator

You can add more complex tests here.

Comment on lines -45 to +42
- log.Tracef("event dropped by rate limit. Probe %s\t(%d dropped events out of %d)\n",
+ return nil, log.Errorf("event dropped by rate limit. Probe %s\t(%d dropped events out of %d)\n",
Collaborator

This will create a lot of noise in the logs, we should not error log every rate limited event.

Member Author

The error is logged at the trace level (for normal events; error level for config events). Do you think that's still too noisy?

pkg/dynamicinstrumentation/eventparser/event_parser.go
Comment on lines 110 to 111
if len(buffer) > bufferIndex+int(paramDefinition.Size) {
	paramDefinition.ValueStr = string(buffer[bufferIndex : bufferIndex+int(paramDefinition.Size)])
Collaborator

This is incorrect; it will lose data if the value ends exactly at the end of the buffer. It should be len(buffer) >= bufferIndex+int(paramDefinition.Size), right?

There are multiple places in this file with the same problem.
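
For reference, a small self-contained sketch of the boundary condition (hypothetical names, not the event parser's actual structures): a value that ends exactly at len(buffer) is still in bounds, so the guard needs >= rather than >.

package main

import "fmt"

// readValue copies size bytes starting at bufferIndex, if they are in bounds.
func readValue(buffer []byte, bufferIndex, size int) (string, bool) {
	// >= (not >) so that a value ending exactly at len(buffer) is still read.
	if len(buffer) >= bufferIndex+size {
		return string(buffer[bufferIndex : bufferIndex+size]), true
	}
	return "", false
}

func main() {
	buf := []byte("hello")
	fmt.Println(readValue(buf, 0, 5)) // "hello", true — dropped if the check used a strict >
	fmt.Println(readValue(buf, 2, 4)) // "", false — genuinely out of bounds
}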

Member Author

@grantseltzer grantseltzer Dec 20, 2024

I'm sure this has never happened in our testing of DI, but you're correct. Good catch 👍 I'll write a test case as well.

I misremembered whether Go would panic when you slice up to the length of the slice.

i.e.:

x := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
fmt.Println(len(x), x[:10]) // ok: slicing up to len(x) is valid
fmt.Println(x[10])          // not ok: indexing at len(x) panics with index out of range

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Dec 19, 2024

Fast Unit Tests Report

On pipeline 51750040 (CI Visibility). The following jobs did not run any unit tests:

Jobs:
  • tests_deb-arm64-py3
  • tests_deb-x64-py3
  • tests_flavor_dogstatsd_deb-x64
  • tests_flavor_heroku_deb-x64
  • tests_flavor_iot_deb-x64
  • tests_rpm-arm64-py3
  • tests_rpm-x64-py3
  • tests_windows-x64

If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.

@grantseltzer grantseltzer force-pushed the grantseltzer/DEBUG-2982-multiple-dereference-support-2 branch from b2892a3 to df484f3 on December 19, 2024 21:43
@grantseltzer grantseltzer force-pushed the grantseltzer/DEBUG-2982-multiple-dereference-support-2 branch from be4efa2 to 1f64aff on December 20, 2024 19:29
@grantseltzer grantseltzer force-pushed the grantseltzer/DEBUG-2982-multiple-dereference-support-2 branch from 224e16a to f0b61fb on December 21, 2024 03:53
Labels
  • component/system-probe
  • long review (PR is complex, plan time to review it)
  • qa/no-code-change (No code change in Agent code requiring validation)
  • team/dynamic-instrumentation (Dynamic Instrumentation)
4 participants